55 research outputs found

    Software redundancy: what, where, how

    Software systems have become pervasive in everyday life and are the core component of many crucial activities. An inadequate level of reliability may lead to the commercial failure of a software product. Still, despite the commitment and the rigorous verification processes employed by developers, software is deployed with faults. To increase the reliability of software systems, researchers have investigated the use of various forms of redundancy. Informally, a software system is redundant when it performs the same functionality through the execution of different elements. Redundancy has been extensively exploited in many software engineering techniques, for example in fault-tolerance and reliability engineering and in self-adaptive and self-healing programs. Despite these many uses, though, there is no formalization or study of software redundancy to support a proper and effective design of software. Our intuition is that a systematic and formal investigation of software redundancy will lead to more, and more effective, uses of redundancy. This thesis develops this intuition and proposes a set of ways to characterize redundancy both qualitatively and quantitatively. We first formalize the intuitive notion of redundancy whereby two code fragments are considered redundant when they perform the same functionality through different executions. On the basis of this abstract and general notion, we then develop a practical method to obtain a measure of software redundancy. We prove the effectiveness of our measure by showing that it distinguishes between shallow differences, where apparently different code fragments reduce to the same underlying code, and deep code differences, where the algorithmic nature of the computations differs. We also demonstrate that our measure is useful for developers, since it is a good predictor of the effectiveness of techniques that exploit redundancy.
    Besides formalizing the notion of redundancy, we investigate the pervasiveness of redundancy intrinsically found in modern software systems. Intrinsic redundancy is a form of redundancy that occurs as a by-product of modern design and development practices. We have observed that intrinsic redundancy is indeed present in software systems and that it can be successfully exploited for useful purposes. This thesis proposes a technique to automatically identify equivalent method sequences in software systems, helping developers assess the presence of intrinsic redundancy. We demonstrate the effectiveness of the technique by showing that it identifies the majority of equivalent method sequences in a system with good precision and performance.
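    The core idea above, that redundant fragments compute the same result through observably different executions, can be illustrated with a toy dissimilarity score. The line-trace proxy and the `difflib`-based measure below are illustrative assumptions, not the thesis's actual metric:

```python
import difflib
import sys

def trace_lines(func, *args):
    """Record the sequence of executed line offsets as a crude proxy
    for the 'execution' of a code fragment."""
    events = []
    def tracer(frame, event, arg):
        if event == "line" and frame.f_code is func.__code__:
            events.append(frame.f_lineno - func.__code__.co_firstlineno)
        return tracer
    sys.settrace(tracer)
    try:
        result = func(*args)
    finally:
        sys.settrace(None)
    return result, events

# Two functionally equivalent fragments with deeply different algorithms.
def sum_loop(n):
    total = 0
    for i in range(1, n + 1):
        total += i
    return total

def sum_formula(n):
    return n * (n + 1) // 2

r1, t1 = trace_lines(sum_loop, 100)
r2, t2 = trace_lines(sum_formula, 100)
assert r1 == r2  # same functionality...
# ...but very different executions: a high score suggests deep redundancy.
dissimilarity = 1.0 - difflib.SequenceMatcher(None, t1, t2).ratio()
```

A shallow difference (say, a renamed variable) would yield nearly identical traces and a score close to zero, whereas the loop-versus-formula pair scores near one.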

    First Responder Safety in the Event of a Dirty Bomb Detonation in Urban Environment

    The malevolent dispersion of radioactive material, with the aim of contaminating people and the environment, is considered a credible terrorist threat. This article analyzes a hypothetical Dirty Bomb detonation in an urban area, estimating the radiological consequences to the involved population and to first responders. The dispersion of radioactive material is simulated using the HOTSPOT code, considering the explosion of devices containing (alternatively) 60Co, 137Cs, 192Ir, 238Pu or 241Am sources, which are frequently used in medical or industrial settings. Each source is evaluated separately. The resulting ground deposition is used to calculate the effective dose received by first responders in two different scenarios. Depending on the dispersed radionuclide, the influence of personal protective respirators is also analyzed. Confirming previously published results, this article illustrates that the radioactive material is diluted by the detonation, resulting in relatively low doses to the general public. However, the emergency workers’ stay time in the most contaminated area must be carefully planned in order to limit the received dose. Due to the general fear of radiation, extensive psychological effects are expected in the public, irrespective of the evaluated radiation dose.
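    The stay-time planning mentioned above follows from a simple relation: the ground-shine dose rate scales with the deposition, and the permitted stay time is the dose limit divided by that rate. The sketch below shows the arithmetic only; the deposition value and the dose-rate coefficient are hypothetical placeholders, not HOTSPOT outputs or validated dosimetric data:

```python
def max_stay_time_hours(ground_dep_bq_m2, dose_coeff_sv_h_per_bq_m2, dose_limit_sv):
    """Stay time (hours) that keeps a worker's effective dose below
    dose_limit_sv, assuming a constant ground-shine dose rate."""
    dose_rate_sv_h = ground_dep_bq_m2 * dose_coeff_sv_h_per_bq_m2
    return dose_limit_sv / dose_rate_sv_h

# Hypothetical example: 1e6 Bq/m2 of deposition, an illustrative
# coefficient of 2e-12 Sv/h per Bq/m2, and a 20 mSv emergency limit.
stay_h = max_stay_time_hours(1e6, 2e-12, 20e-3)
```

Doubling the deposition halves the permitted stay time, which is why the most contaminated zone dominates the planning.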

    Gene Expression Clustering and Selected Head and Neck Cancer Gene Signatures Highlight Risk Probability Differences in Oral Premalignant Lesions

    BACKGROUND: Oral premalignant lesions (OPLs) represent the most common oral precancerous conditions. One of the major challenges in this field is the identification of OPLs at higher risk for oral squamous cell cancer (OSCC) development, by discovering molecular pathways deregulated in the early steps of malignant transformation. Analysis of deregulated levels of single genes and pathways has been successfully applied to head and neck squamous cell cancers (HNSCC) and OSCC, with prognostic/predictive implications. Exploiting the availability of gene expression profiles and clinical follow-up information for a well-characterized cohort of OPL patients, we aim to dissect tissue OPL gene expression to identify molecular clusters/signatures associated with oral cancer free survival (OCFS). MATERIALS AND METHODS: The gene expression data of 86 OPL patients were challenged with: an HNSCC-specific 6-molecular-subtype model (immune related: HPV related, Defense Response and Immunoreactive; Mesenchymal, Hypoxia and Classical); one OSCC-specific signature (13 genes); two metabolism-related signatures (3 genes, and signatures raised from 6 metabolic pathways associated with prognosis in HNSCC and OSCC, respectively); and a hypoxia gene signature. The molecular stratification and high versus low expression of the signatures were correlated with OCFS by Kaplan-Meier analyses. The association of gene expression profiles among the tested biological models and clinical covariates was tested through variance partition analysis. RESULTS: Patients in the Mesenchymal, Hypoxia and Classical clusters showed a higher risk of malignant transformation than those in the immune-related clusters (log-rank test, p = 0.0052), and they expressed four enriched hallmarks: "TGF beta signaling", "angiogenesis", "unfolded protein response" and "apical junction". Overall, 54 cases fell into the immune-related clusters, while the remaining 32 cases belonged to the other clusters.
    No other signatures showed an association with OCFS. Our variance partition analysis proved that clinical and molecular features are able to explain only 21% of gene expression data variability, while the remaining 79% refers to residuals independent of known parameters. CONCLUSIONS: Applying the existing signatures derived from HNSCC to OPLs, we identified only a protective effect for immune-related signatures. Other gene expression profiles derived from overt cancers were not able to identify the risk of malignant transformation, possibly because they are linked to later stages of cancer progression. The availability of a new well-characterized set of OPL patients and further research are needed to improve
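    The cluster comparison above rests on Kaplan-Meier estimates of oral cancer free survival. A minimal product-limit estimator can be sketched as follows; the follow-up times and event flags are invented for illustration and are not the study's data:

```python
def kaplan_meier(times, events):
    """Product-limit estimator: times in months, events 1 = malignant
    transformation observed, 0 = censored. Returns (time, S(t)) steps."""
    order = sorted(range(len(times)), key=lambda i: times[i])
    ts = [times[i] for i in order]
    es = [events[i] for i in order]
    at_risk = len(ts)
    surv = 1.0
    curve = []
    i = 0
    while i < len(ts):
        t = ts[i]
        n = at_risk          # subjects at risk just before time t
        d = 0                # events observed at time t
        while i < len(ts) and ts[i] == t:
            d += es[i]       # censored subjects leave the risk set silently
            at_risk -= 1
            i += 1
        if d:
            surv *= 1 - d / n
            curve.append((t, surv))
    return curve

# Invented follow-up data for two hypothetical clusters (months, event flag).
immune = kaplan_meier([12, 24, 36, 48, 60], [0, 0, 1, 0, 0])
other = kaplan_meier([6, 12, 18, 24, 30], [1, 1, 0, 1, 1])
```

Comparing the two step curves (here ending at roughly 0.67 versus 0) is what the log-rank test formalizes when contrasting immune-related and other clusters.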

    Neoplastic lymphangiosis of the upper aerodigestive tract simulating field cancerization: histopathological analysis, surgical limits and literature review

    Neoplastic lymphangiosis is defined as extensive embolic spread of cancer cells in the lymphatic vessels, often without any evidence of a mass. Field cancerization, by contrast, is defined by the presence of multifocal neoplastic lesions in a mucosal field previously exposed to mutagenic factors. In this case report, the latter entity was initially suggested by the wide extent and multifocality of the disease and by the patient's exposure to risk factors. However, the pathological slides revealed the integrity of the mucosa and the presence of widespread embolic metastases in the lymphatic vessels. Thus, the diagnosis was changed to neoplastic lymphangiosis. This clinical presentation is a negative prognostic factor, and surgical treatment is ineffective because of the impossibility of obtaining adequate free margins. The present case underlines the poor prognosis of such locally advanced cancer and the importance of recognizing it early so that the treatment approach can be adapted.

    Molecular Immune-Inflammatory Connections between Dietary Fats and Atherosclerotic Cardiovascular Disease: Which Translation into Clinics?

    Current guidelines recommend reducing the daily intake of dietary fats for the prevention of ischemic cardiovascular diseases (CVDs). Avoiding saturated fats while increasing the intake of mono- or polyunsaturated fatty acids has long been the cornerstone of dietary approaches in cardiovascular prevention, mainly due to the metabolic effects of these molecules. Recently, however, this approach has been critically revised. The experimental evidence, in fact, supports the concept that the pro- or anti-inflammatory potential of different dietary fats contributes to atherogenic or anti-atherogenic cellular and molecular processes beyond (or in addition to) their metabolic effects. All these aspects are hard to translate into clinics when trying to find connections between the pro-/anti-inflammatory potential of dietary lipids and their effects on CVD outcomes. Interventional trials, although providing stronger potential for causal inference, typically have small sample sizes, short follow-up, noncompliance, and high attrition rates. Observational studies, meanwhile, are confounded by a number of variables, and the quantification of dietary intakes is far from optimal. A better understanding of the anatomic and physiological barriers to absorption and of the players involved in the metabolism of dietary lipids (e.g., gut microbiota) might be an alternative strategy in the attempt to provide a first step towards a personalized dietary approach in CVD prevention.

    Analysis of Power Processing Architectures for Thermoelectric Energy Harvesting

    This paper analyzes, from a power processing architecture standpoint, the recovery of waste thermal energy by means of thermoelectric (TE) modules and arrays. The existence, in many industrial scenarios, of stable and often significant temperature gradients enables a number of possibilities for effective processing and recovery of waste heat, which could result in marked economic savings and environmental benefits if adopted on a large scale and systematically embedded into industrial processes. A review of the TE source is provided first, along with an extensive experimental characterization of commercial bismuth-telluride TE cells. Results indicate that maximum power extraction from TE generators can be achieved at a TE efficiency close to optimal, motivating the adoption of maximum power point tracking architectures, traditionally employed in photovoltaic systems, for TE plants as well. Three power processing architectures are then analyzed and compared in terms of their maximum power extraction capabilities and of the efficiency constraints they pose on the power processors. Unlike the photovoltaic case, the simple series configuration of TE modules allows most of the available power to be extracted even in the presence of rather large mismatches among the modules. For even larger mismatch levels, the differential power processing (DPP) concept, already introduced for dc-dc distribution systems and photovoltaic plants, can be successfully adopted to improve power extraction. On the other hand, the module-integrated converter architecture, another well-established solution for photovoltaic sources, is found to be much less suitable for TE generators than the DPP solution. The main conclusions are experimentally validated using a DPP architecture with a two-cell test bed operated at different thermal gradients.
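    The maximum power point tracking motivation above can be illustrated with the usual Thévenin model of a TE module: power delivered to the load peaks when the load resistance matches the module's internal resistance. The parameter values below are arbitrary illustrative numbers, not the characterized bismuth-telluride cells:

```python
# Thévenin model of a TE module: open-circuit voltage v_oc behind an
# internal resistance r_int. Values are hypothetical, for illustration only.
def load_power(v_oc, r_int, r_load):
    current = v_oc / (r_int + r_load)
    return current ** 2 * r_load        # power delivered to the load

v_oc, r_int = 4.0, 2.0                  # hypothetical module parameters
loads = [0.5 + 0.1 * k for k in range(60)]
powers = [load_power(v_oc, r_int, r) for r in loads]
best = loads[powers.index(max(powers))] # load at which extraction peaks
p_max_theory = v_oc ** 2 / (4 * r_int)  # matched-load maximum
```

An MPPT converter effectively performs this sweep online, presenting the TE array with whatever equivalent load maximizes the extracted power as the thermal gradient changes.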

    Black-Box Large-Signal Average Modeling of DC-DC Converters Using NARX-ANNs

    This paper investigates the use of non-linear autoregressive exogenous (NARX) artificial neural networks (ANNs) to obtain black-box average dynamic models of dc-dc converters capable of capturing the main converter non-linearities. Non-linearities may include, for example, dynamic behavior variations due to changes of operating point or operating mode (e.g., discontinuous conduction mode, continuous conduction mode). The paper presents design guidelines for determining the NARX-ANN architecture and the dataset to be used in the training process. Dataset definition includes the choice of the perturbations that stimulate the targeted system behaviors and optimizations for dataset size reduction. The proposed approach is first derived for a dc-dc boost converter. To verify its generality, the same methodology is also applied to a Ćuk converter. In both cases, the proposed NARX-ANN modeling provides accurate results, with only limited deviations observed in the time-domain responses to step variations of duty cycle and output current. The model also reproduces accurate small-signal behavior under different operating conditions. The validity of the approach is evaluated experimentally on a boost converter prototype.
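    The lagged-regressor structure underlying a NARX model can be sketched with a much simpler stand-in: a polynomial NARX that is linear in its parameters, fitted by least squares to a synthetic nonlinear plant. This is only a structural illustration under invented dynamics, not the paper's NARX-ANN or its converter data:

```python
import numpy as np

# Synthetic nonlinear plant standing in for an averaged converter model;
# the coefficients are invented, NOT identified from any real converter.
rng = np.random.default_rng(0)
N = 400
u = rng.uniform(-1.0, 1.0, N)   # exogenous input (e.g., a duty-cycle perturbation)
y = np.zeros(N)
for k in range(2, N):
    y[k] = 0.6 * y[k-1] - 0.1 * y[k-2] + 0.5 * u[k-1] - 0.2 * y[k-1] ** 2

# NARX regressor vector: lagged outputs/inputs plus one nonlinear term.
def regressors(y, u, k):
    return [y[k-1], y[k-2], u[k-1], y[k-1] ** 2]

Phi = np.array([regressors(y, u, k) for k in range(2, N)])
target = y[2:]
theta, *_ = np.linalg.lstsq(Phi, target, rcond=None)  # least-squares fit

pred = Phi @ theta                                    # one-step-ahead prediction
rmse = float(np.sqrt(np.mean((pred - target) ** 2)))
```

An ANN replaces the hand-picked polynomial terms with learned nonlinear features of the same lagged inputs, which is what lets it track operating-point and conduction-mode changes without an explicit model structure.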